Transition to chaos in random neuronal networks
Firing patterns in the central nervous system often exhibit strong temporal
irregularity and heterogeneity in their time-averaged response properties.
Previous studies suggested that these properties are the outcome of intrinsic
chaotic dynamics. Indeed, simplified rate-based large neuronal networks with
random synaptic connections are known to exhibit a sharp transition from a
fixed point to chaotic dynamics when the synaptic gain is increased. However,
the existence of a similar transition in neuronal circuit models with more
realistic architectures and firing dynamics has not been established.
In this work we investigate the rate-based dynamics of neuronal circuits
composed of several subpopulations with random connectivity. Nonzero
connections are either positive, for excitatory neurons, or negative, for
inhibitory ones, while single-neuron output is strictly positive, in line with
known constraints in many biological systems. Using Dynamic Mean Field Theory,
we find the phase diagram depicting the regimes of stable fixed point, unstable
dynamics, and chaotic rate fluctuations. We characterize the properties of
systems near the chaotic transition and show that dilute excitatory-inhibitory
architectures exhibit the same onset to chaos as a network with Gaussian
connectivity. Interestingly, the critical properties near the transition depend
on the shape of the single-neuron input-output transfer function near the
firing threshold.
Finally, we investigate network models with spiking dynamics. When synaptic
time constants are slow relative to the mean inverse firing rates, the network
undergoes a sharp transition from fast spiking fluctuations and static firing
rates to a state with slow chaotic rate fluctuations. When the synaptic time
constants are finite, the transition becomes smooth and obeys scaling
properties, similar to crossover phenomena in statistical mechanics.
Comment: 28 pages, 12 figures, 5 appendices
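The transition described above can be illustrated with a minimal sketch (not the paper's constrained excitatory-inhibitory model): a classic rate network dx/dt = -x + g J φ(x) with unconstrained Gaussian couplings J and φ = tanh, where the fixed point x = 0 is stable for gain g < 1 and gives way to chaotic rate fluctuations for g > 1. All parameter values here are illustrative choices.

```python
import numpy as np

# Minimal sketch of a transition to chaos in a random rate network
# (Gaussian couplings and tanh transfer function are simplifying
# assumptions, not the excitatory-inhibitory model of the paper).
rng = np.random.default_rng(0)
N, dt, T = 200, 0.05, 2000  # network size, Euler step, steps

def simulate(g):
    """Euler-integrate dx/dt = -x + g * J @ tanh(x), J_ij ~ N(0, 1/N)."""
    J = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
    x = rng.normal(0.0, 0.1, N)          # small random initial condition
    traj = np.empty((T, N))
    for t in range(T):
        x += dt * (-x + g * J @ np.tanh(x))
        traj[t] = x
    return traj

# Late-time activity variance: near zero below the transition (the
# fixed point is stable), order one above it (chaotic fluctuations).
for g in (0.5, 1.5):
    var = simulate(g)[T // 2:].var()
    print(f"g={g}: late-time variance {var:.3f}")
```

Increasing the gain g through 1 is the rate-network analogue of increasing the synaptic gain in the abstract above.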
Classification and Geometry of General Perceptual Manifolds
Perceptual manifolds arise when a neural population responds to an ensemble
of sensory signals associated with different physical features (e.g.,
orientation, pose, scale, location, and intensity) of the same perceptual
object. Object recognition and discrimination require classifying the
manifolds in a manner that is insensitive to variability within a manifold. How
neuronal systems give rise to invariant object classification and recognition
is a fundamental problem in brain theory as well as in machine learning. Here
we study the ability of a readout network to classify objects from their
perceptual manifold representations. We develop a statistical mechanical theory
for the linear classification of manifolds with arbitrary geometry revealing a
remarkable relation to the mathematics of conic decomposition. Novel
geometrical measures of manifold radius and manifold dimension are introduced
which can explain the classification capacity for manifolds of various
geometries. The general theory is demonstrated on a number of representative
manifolds, including L2 ellipsoids prototypical of strictly convex manifolds,
L1 balls representing polytopes consisting of finite sample points, and
orientation manifolds which arise from neurons tuned to respond to a continuous
angle variable, such as object orientation. The effects of label sparsity on
the classification capacity of manifolds are elucidated, revealing a scaling
relation between label sparsity and manifold radius. Theoretical predictions
are corroborated by numerical simulations using recently developed algorithms
to compute maximum margin solutions for manifold dichotomies. Our theory and
its extensions provide a powerful and rich framework for applying statistical
mechanics of linear classification to data arising from neuronal responses to
object stimuli, as well as to artificial deep networks trained for object
recognition tasks.
Comment: 24 pages, 12 figures, Supplementary Material
Globally Gated Deep Linear Networks
Recently proposed Gated Linear Networks present a tractable nonlinear network
architecture, and exhibit interesting capabilities such as learning with local
error signals and reduced forgetting in sequential learning. In this work, we
introduce a novel gating architecture, named Globally Gated Deep Linear
Networks (GGDLNs) where gating units are shared among all processing units in
each layer, thereby decoupling the architectures of the nonlinear but unlearned
gatings and the learned linear processing motifs. We derive exact equations for
the generalization properties in these networks in the finite-width
thermodynamic limit, defined by P, N → ∞ with P/N = O(1), where P
and N are the training sample size and the network width, respectively. We find
that the statistics of the network predictor can be expressed in terms of
kernels that undergo shape renormalization through a data-dependent matrix
compared to the GP kernels. Our theory accurately captures the behavior of
finite width GGDLNs trained with gradient descent dynamics. We show that kernel
shape renormalization gives rise to rich generalization properties w.r.t.
network width, depth and L2 regularization amplitude. Interestingly, networks
with sufficient gating units behave similarly to standard ReLU networks.
Although gatings in the model do not participate in supervised learning, we
show the utility of unsupervised learning of the gating parameters.
Additionally, our theory allows the evaluation of the network's ability for
learning multiple tasks by incorporating task-relevant information into the
gating units. In summary, our work is the first exact theoretical solution of
learning in a family of nonlinear networks with finite width. The rich and
diverse behavior of the GGDLNs suggests that they are useful, analytically
tractable models of learning single and multiple tasks in finite-width
nonlinear deep networks.
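The architectural idea can be sketched in a few lines. In the forward pass below (layer sizes, the sign-based gating nonlinearity, and the initialization are illustrative assumptions, not taken from the paper), each layer applies learned linear weights, modulated elementwise by gating variables computed from the input through fixed, unlearned random weights shared across the layer: the network is nonlinear in its input but linear in every learned parameter.

```python
import numpy as np

# Hedged sketch of a globally gated deep linear network forward pass.
# Learned parameters (Ws, a) act purely linearly; the nonlinearity
# comes only from fixed, unlearned gating weights Us (an assumption
# about the gating form, chosen here as a binary sign gate).
rng = np.random.default_rng(2)
d_in, width, depth = 10, 64, 3

Ws = [rng.normal(0, 1 / np.sqrt(d), (width, d))        # learned linear weights
      for d in [d_in] + [width] * (depth - 1)]
Us = [rng.normal(size=(width, d_in)) for _ in range(depth)]  # fixed gating weights
a = rng.normal(0, 1 / np.sqrt(width), width)           # learned linear readout

def forward(x):
    h = x
    for W, U in zip(Ws, Us):
        g = (U @ x > 0).astype(float)   # unlearned gating, a function of the input only
        h = g * (W @ h)                 # gated linear processing motif
    return a @ h

x = rng.normal(size=d_in)
print(forward(x))
```

Because the gates depend only on the input and are never trained, the map from the learned weights to the output stays multilinear, which is what makes exact generalization theory tractable in this family while the input-output map remains nonlinear.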